5 research outputs found

    A Modelling Approach to Multi-Domain Traceability

    Traceability is an important concern in projects that span different engineering domains. Traceability can also be mandated, exploited and managed across the engineering lifecycle, and may involve defining connections between heterogeneous models. As a result, traceability can be considered to be multi-domain. This thesis introduces the concept and challenges of multi-domain traceability and explains how it can be used to support typical traceability scenarios. It proposes a model-based approach to developing a traceability solution that operates effectively across multiple engineering domains. The approach introduces a collection of tasks and structures which address the identified challenges of a traceability solution in multi-domain projects. The approach demonstrates that modelling principles and MDE techniques can help to address current challenges and consequently improve the effectiveness of a multi-domain traceability solution. A prototype of the tooling required to support the approach is implemented with EMF and atop Epsilon; it consists of an implementation of the proposed structures (models) and of model management operations to support traceability. Moreover, the approach is illustrated in the context of two safety-critical projects where multi-domain traceability is required to underpin certification arguments.
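    As a rough illustration of the kind of structure such a traceability solution manages (the thesis itself builds on EMF and Epsilon, whose metamodels are not reproduced here), the following Python sketch represents trace links between elements of heterogeneous models from different engineering domains. All class, attribute and link-type names are hypothetical, introduced only for illustration.

        # Minimal sketch of a multi-domain trace model (hypothetical names,
        # not the thesis' actual EMF/Epsilon metamodel).
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class ModelElement:
            domain: str      # engineering domain (e.g. "systems", "safety", "software")
            model: str       # name of the containing model
            element_id: str  # identifier of the element within that model

        @dataclass
        class TraceLink:
            source: ModelElement
            target: ModelElement
            link_type: str   # e.g. "satisfies", "verifies", "mitigates"

        @dataclass
        class TraceModel:
            links: list = field(default_factory=list)

            def add(self, source, target, link_type):
                self.links.append(TraceLink(source, target, link_type))

            def trace_from(self, element):
                """Return all links whose source is the given element."""
                return [link for link in self.links if link.source == element]

        # Example: a requirement in the systems domain traced to a hazard
        # in the safety domain, the kind of cross-domain connection that
        # certification arguments rely on.
        req = ModelElement("systems", "requirements.model", "REQ-12")
        haz = ModelElement("safety", "hazards.model", "HAZ-3")
        tm = TraceModel()
        tm.add(req, haz, "mitigates")
        print(tm.trace_from(req))

    In the thesis the equivalent structures are expressed as models and manipulated with model management operations rather than plain objects; the sketch only conveys the shape of a cross-domain trace link.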

    Near Failure Analysis using Dynamic Behavioural Data

    Automated testing is a safeguard against software regression and provides huge benefits, yet it remains a challenging subject. Among other risks, test cases can be too specific and therefore inefficient. Many forms of undesirable behaviour are compatible with a typical program's specification and nevertheless harm users. An efficient test should provide as much information as possible relative to the resources spent. This paper introduces near failure analysis, which complements testing activities by analysing dynamic behavioural metrics (e.g., execution time) in addition to explicit output values. The approach employs machine learning (ML) to classify the behaviour of a program as faulty or healthy based on dynamic data gathered throughout its executions over time. An ML-based model is designed and trained to predict whether or not an arbitrary version of a program is at risk of failure. A very preliminary evaluation demonstrates promising results for the feasibility and effectiveness of near failure analysis.
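    A minimal sketch of the general idea follows: train a classifier on dynamic behavioural metrics from program runs labelled faulty or healthy, then use it to judge whether a new version's run profile looks at risk. The specific features, the choice of a random forest, and the synthetic data below are assumptions for illustration; they are not taken from the paper.

        # Illustrative sketch only: classify program runs as faulty/healthy
        # from dynamic behavioural metrics. Features, model and data are
        # hypothetical, not the paper's actual setup.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Each row: [execution_time_ms, peak_memory_mb, calls_to_hot_function]
        healthy = rng.normal(loc=[120, 50, 1000], scale=[10, 5, 50], size=(200, 3))
        faulty  = rng.normal(loc=[180, 65, 1400], scale=[25, 10, 120], size=(200, 3))

        X = np.vstack([healthy, faulty])
        y = np.array([0] * len(healthy) + [1] * len(faulty))  # 0 = healthy, 1 = faulty

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))

        # Ask whether a new version's behavioural profile looks at risk of failure,
        # even though its explicit outputs may still satisfy the specification.
        new_run = np.array([[175, 63, 1350]])
        print("at risk" if clf.predict(new_run)[0] == 1 else "looks healthy")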